Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade. Yet, our understanding of this phenomenon stems from a rather fragmented pool of knowledge; at present, there are a handful of attacks, each with disparate assumptions in threat models and incomparable definitions of optimality. In this paper, we propose a systematic approach to characterizing worst-case (i.e., optimal) adversaries. We first introduce an extensible decomposition of attacks in adversarial machine learning by atomizing attack components into surfaces and travelers. With our decomposition, we enumerate over components to create 576 attacks (568 of which were previously unexplored). Next, we propose the Pareto Ensemble Attack (PEA): a theoretical attack that upper-bounds attack performance. With our new attack, we measure performance relative to the PEA on robust and non-robust models, seven datasets, and three extended lp-based threat models that incorporate compute costs, thereby formalizing the space of adversarial strategies. From our evaluation, we find that attack performance is highly contextual: the domain, model robustness, and threat model can have a profound influence on attack efficacy. Our investigation suggests that future studies measuring the security of machine learning should (1) be contextualized to the domain and threat model, and (2) go beyond the handful of known attacks in use today.
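A minimal sketch of the Pareto-ensemble idea described above: for each input, keep the strongest adversarial example produced by any attack in a pool, so the ensemble's per-sample performance upper-bounds each individual attack. The names `attacks` and `model_loss` are illustrative assumptions, not the paper's API.

```python
import numpy as np

def pareto_ensemble(x, y, attacks, model_loss):
    """Return, per sample, the perturbed input that maximizes the model's loss.

    attacks:    list of callables (x, y) -> x_adv
    model_loss: callable (x, y) -> per-sample loss array
    """
    best_adv = x.copy()
    best_loss = model_loss(x, y)            # baseline loss per sample
    for attack in attacks:
        x_adv = attack(x, y)
        loss = model_loss(x_adv, y)
        improved = loss > best_loss         # samples where this attack is stronger
        best_adv[improved] = x_adv[improved]
        best_loss[improved] = loss[improved]
    return best_adv
```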
Machine learning is vulnerable to adversarial examples: inputs designed to make models perform poorly. However, it is unclear whether adversarial examples represent realistic inputs in the modeled domains. Diverse domains such as networks and phishing have domain constraints: complex relationships between features that an adversary must satisfy for an attack to be realized (in addition to any adversary-specific goals). In this paper, we explore how domain constraints limit adversarial capabilities and how adversaries can adapt their strategies to create realistic (constraint-compliant) examples. To this end, we develop techniques to learn domain constraints from data and show how the learned constraints can be integrated into the adversarial crafting process. We evaluate the efficacy of our approach on network intrusion and phishing datasets and find that (1) up to 82% of adversarial examples produced by state-of-the-art crafting algorithms violate domain constraints, and (2) domain constraints are robust to adversarial examples: enforcing constraints yields an increase in model accuracy of up to 34%. We observe not only that adversaries must alter inputs to satisfy domain constraints, but also that these constraints make the generation of valid adversarial examples far more challenging.
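A minimal sketch of integrating learned constraints into crafting, assuming only simple per-feature box constraints learned from data; the paper's learned constraints are richer, and this only illustrates projecting a crafted input back into the feasible (constraint-compliant) region.

```python
import numpy as np

def learn_box_constraints(X_train):
    """Learn per-feature lower/upper bounds observed in the training data."""
    return X_train.min(axis=0), X_train.max(axis=0)

def project_to_constraints(x_adv, lower, upper):
    """Clip an adversarial example so every feature stays within the learned bounds."""
    return np.clip(x_adv, lower, upper)

# Usage (hypothetical names):
#   lower, upper = learn_box_constraints(X_train)
#   x_valid = project_to_constraints(x_adv, lower, upper)
```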
There is growing interest in characterizing the erroneous behavior of systems containing deep learning models before they are deployed in any safety-critical scenario. However, characterizing such behavior usually requires large-scale testing of the model, which can be extremely computationally expensive for complex real-world tasks, for example, tasks that involve compute-intensive object detectors as one of their components. In this work, we propose a method that enables efficient large-scale testing using simplified, low-fidelity simulators without incurring the computational cost of executing expensive deep learning models. Our approach relies on designing efficient surrogate models that correspond to the compute-intensive components of the task under test. We demonstrate the efficacy of our methodology by training efficient surrogate models for the PIXOR and CenterPoint LiDAR detectors and by evaluating the performance of an autonomous driving task in the CARLA simulator at reduced computational expense, while showing that the accuracy of the simulation is maintained.
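A minimal sketch of the surrogate idea, under assumed inputs: fit a cheap classifier that predicts a detector-dependent outcome (here, whether an object would be detected) from low-fidelity scene features, so the full LiDAR detector need not run at every simulation step. The feature names and labels are illustrative stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical scene features: [distance_to_object, relative_angle, occlusion_ratio]
scene_features = np.random.rand(5000, 3)
# Stand-in labels that would, in practice, come from running the real detector offline
detected = (scene_features[:, 0] < 0.6).astype(int)

# Train the cheap surrogate once, offline
surrogate = GradientBoostingClassifier().fit(scene_features, detected)

# Inside the low-fidelity simulation loop, query the surrogate instead of the detector
p_detect = surrogate.predict_proba(scene_features[:1])[0, 1]
```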
In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
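A minimal sketch of gradient-based evasion in the spirit of the abstract (not the paper's exact algorithm): iteratively push a malicious sample along the negative gradient of the classifier's discriminant function while bounding the total manipulation. The function names are assumptions for illustration.

```python
import numpy as np

def evade(x0, grad_f, step=0.01, d_max=1.0, n_iter=200):
    """grad_f(x): gradient of the classifier's score for the malicious class at x."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x - step * grad_f(x)          # descend the discriminant function
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > d_max:                  # project back into the allowed manipulation ball
            x = x0 + delta * (d_max / norm)
    return x
```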
We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM's test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM's decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier's test error.
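A simplified sketch of the poisoning idea, using a finite-difference gradient and retraining rather than the paper's closed-form gradient of the SVM solution: nudge one poison point so that retraining the SVM on the contaminated data increases validation error.

```python
import numpy as np
from sklearn.svm import SVC

def validation_error(x_p, y_p, X_tr, y_tr, X_val, y_val):
    """Error on the validation set after retraining with one poison point appended."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(np.vstack([X_tr, x_p]), np.append(y_tr, y_p))
    return 1.0 - clf.score(X_val, y_val)

def poison(x_p, y_p, X_tr, y_tr, X_val, y_val, step=0.1, n_iter=20, eps=1e-2):
    """Gradient-ascent-style update of the poison point via numerical differentiation."""
    x_p = np.asarray(x_p, dtype=float).copy()
    for _ in range(n_iter):
        grad = np.zeros_like(x_p)
        for j in range(x_p.size):
            e = np.zeros_like(x_p)
            e[j] = eps
            grad[j] = (validation_error(x_p + e, y_p, X_tr, y_tr, X_val, y_val)
                       - validation_error(x_p - e, y_p, X_tr, y_tr, X_val, y_val)) / (2 * eps)
        x_p = x_p + step * grad           # ascend to increase validation error
    return x_p
```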